Boosting Learning for LDPC Codes to Improve the Error-Floor Performance
Low-density parity-check (LDPC) codes have been successfully commercialized in communication systems due to their strong error correction capabilities and simple decoding process. However, the error-floor phenomenon of LDPC codes, in which the error rate stops decreasing rapidly at a certain level, presents challenges for achieving extremely low error rates and deploying LDPC codes in scenarios demanding ultra-high reliability. In this work, we propose training methods for neural min-sum (NMS) decoders to eliminate the error-floor effect. First, by leveraging the boosting learning technique of ensemble networks, we divide the decoding network into two neural decoders and train the post decoder to be specialized for the words that the first decoder fails to correct. Second, to address the vanishing gradient issue in training, we introduce a block-wise training schedule that locally trains a block of weights while retraining the preceding block. Finally, we show that assigning different weights to unsatisfied check nodes effectively lowers the error floor with a minimal number of weights. Applying these training methods to standard LDPC codes, we achieve better error-floor performance than the other decoding methods compared. The proposed NMS decoder, optimized solely through novel training methods without additional modules, can be integrated into existing LDPC decoders without incurring extra hardware costs. The source code is available at https://github.com/ghy1228/LDPC
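The core building block the abstract refers to is the weighted (neural) min-sum check-node update, where each outgoing message gets the extrinsic sign product and minimum magnitude, scaled by a trainable weight. A minimal sketch follows; the weight values and the `syndrome_bit` switch (implementing the paper's idea of a separate weight for unsatisfied check nodes) are illustrative assumptions, not the trained parameters.

```python
import numpy as np

def nms_check_update(msgs, w_sat=0.8, w_unsat=0.5, syndrome_bit=0):
    """One neural min-sum check-node update (illustrative sketch).

    msgs: nonzero variable-to-check messages arriving at one check node.
    w_sat / w_unsat: hypothetical trainable weights; a separate weight is
    applied when the check node is currently unsatisfied (syndrome_bit=1).
    """
    msgs = np.asarray(msgs, dtype=float)
    w = w_unsat if syndrome_bit else w_sat
    signs = np.sign(msgs)
    total_sign = np.prod(signs)
    abs_m = np.abs(msgs)
    out = np.empty(len(msgs))
    for i in range(len(msgs)):
        # extrinsic quantities: exclude message i from sign product and min
        sign_i = total_sign * signs[i]          # signs are +/-1, so this cancels sign i
        min_i = np.min(np.delete(abs_m, i))
        out[i] = w * sign_i * min_i
    return out
```

In a full NMS decoder, one such weight per edge (or per iteration) is trained by unrolling the iterations into a network and backpropagating through them.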
a9be4c2a4041cadbf9d61ae16dd1389e-AuthorFeedback.pdf
In all cases, the training exploded, similar to the no-threshold vanilla arctanh (l.209, 214-215). However, in our case, in many of the units the value explodes to infinity. In the paper (l.203), we ran BP experiments until [...]. Running without the absolute value is part of the ablation. Please see the ablation analysis (l.205-216) and note that: (i) hypernetworks allow us to adapt [...] (Arikan, "Polar codes: A pipelined implementation"), which makes use of the structure of polar [...] (GallagerB, MSA, SPA), while we learn the node activations from scratch. In both codes, our performance is better across all SNRs. At SNR=5.5 and SNR=6, we obtain a third of their bit error rate [...]
Learning to Decode: Reinforcement Learning for Decoding of Sparse Graph-Based Channel Codes
We show in this work that reinforcement learning can be successfully applied to decoding short to moderate length sparse graph-based channel codes. Specifically, we focus on low-density parity-check (LDPC) codes, which, for example, have been standardized for 5G cellular communication systems due to their excellent error correcting performance. These codes are typically decoded via belief propagation iterative decoding on the corresponding bipartite (Tanner) graph of the code via flooding, i.e., all check and variable nodes in the Tanner graph are updated at once. In contrast, in this paper we utilize a sequential update policy which selects the optimal check node (CN) schedule in order to improve decoding performance. In particular, we model the CN update process as a multi-armed bandit process with dependent arms and employ a Q-learning scheme for optimizing the CN scheduling policy. In order to reduce the learning complexity, we propose a novel graph-induced CN clustering approach to partition the state space in such a way that dependencies between clusters are minimized. Our results show that, compared to other decoding approaches from the literature, the proposed reinforcement learning scheme not only significantly improves the decoding performance, but also reduces the decoding complexity dramatically once the scheduling policy is learned.
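The Q-learning scheme described above reduces, at its core, to the standard tabular update, with states standing in for the graph-induced CN clusters and actions selecting which check node to update next. A minimal sketch, where the table shape and hyperparameter values are assumptions for illustration:

```python
import numpy as np

def q_learning_step(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Tabular Q-learning update for CN scheduling (sketch).

    Q: (num_states, num_check_nodes) table; state indexes a residual
    pattern of a CN cluster (illustrative stand-in for the paper's
    clustered state space), action is the CN chosen for update.
    """
    td_target = reward + gamma * np.max(Q[next_state])   # bootstrap from best next action
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q
```

Once the table has converged, decoding simply follows the greedy schedule `argmax(Q[state])`, which is why the learned policy adds essentially no runtime cost.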
Goal-Oriented Source Coding using LDPC Codes for Compressed-Domain Image Classification
In the emerging field of goal-oriented communications, the focus has shifted from reconstructing data to directly performing specific learning tasks, such as classification, segmentation, or pattern recognition, on the received coded data. In the commonly studied scenario of classification from compressed images, a key objective is to enable learning directly on entropy-coded data, thereby bypassing the computationally intensive step of data reconstruction. Conventional entropy-coding methods, such as Huffman and arithmetic coding, are effective for compression but disrupt the data structure, making them less suitable for direct learning without decoding. This paper investigates the use of low-density parity-check (LDPC) codes -- originally designed for channel coding -- as an alternative entropy-coding approach. It is hypothesized that the structured nature of LDPC codes can be leveraged more effectively by deep learning models for tasks like classification. At the receiver side, gated recurrent unit (GRU) models are trained to perform image classification directly on the LDPC-coded data. Experiments on the MNIST, Fashion-MNIST, and CIFAR datasets show that LDPC codes outperform Huffman and arithmetic coding in classification tasks, while requiring significantly smaller learning models. Furthermore, the paper analyzes why LDPC codes preserve data structure more effectively than traditional entropy-coding techniques and explores the impact of key code parameters on classification performance. These results suggest that LDPC-based entropy coding offers a favorable balance between learning efficiency and model complexity, eliminating the need for prior decoding.
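When LDPC codes are repurposed for source coding, a common construction is syndrome-based compression: a bit vector is mapped to its syndrome under a parity-check matrix, and the downstream model (here, a GRU classifier) is trained directly on the syndromes. A minimal sketch, assuming a toy parity-check matrix rather than the code used in the paper:

```python
import numpy as np

def ldpc_syndrome_encode(x, H):
    """Syndrome-based source coding sketch.

    x: source bit vector (0/1), H: (m, n) parity-check matrix with m < n,
    so the m-bit syndrome s = H x (mod 2) is a compressed representation.
    The structure of H is what the classifier can reportedly exploit.
    """
    return (H @ x) % 2

# Toy example (H is illustrative, not a real LDPC matrix):
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
x = np.array([1, 0, 1, 0])
s = ldpc_syndrome_encode(x, H)   # 4 source bits -> 2 syndrome bits
```

Because each syndrome bit is a sparse parity of a few source bits, local structure in the image survives in the coded domain, unlike the bitstream produced by Huffman or arithmetic coding.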
Review for NeurIPS paper: Learning to Decode: Reinforcement Learning for Decoding of Sparse Graph-Based Channel Codes
Strengths: LDPC codes are an indispensable building block of LTE/5G communication systems, so a more efficient and accurate decoding algorithm is impactful for current communication systems. Node-wise scheduling (NS) is known to improve decoding efficiency, yet incurs additional complexity. Using a Q-learning table, the computational complexity improves, which makes the NS-based method viable. The long block lengths of LDPC codes make the number of states exponential; the authors use a cluster-based method to reduce the number of potential states.
5G LDPC Linear Transformer for Channel Decoding
Mario Hernandez, Fernando Pinero
This work introduces a novel, fully differentiable linear-time complexity transformer decoder and a transformer decoder to correct 5G New Radio (NR) LDPC codes. We propose a scalable approach to decode linear block codes with $O(n)$ complexity rather than the $O(n^2)$ of regular transformers. The architectures' performances are compared to Belief Propagation (BP), the production-level decoding algorithm used for 5G New Radio (NR) LDPC codes. We achieve bit error rate performance that matches a regular transformer decoder and surpasses one-iteration BP, while also achieving competitive time performance against BP, even for larger block codes. We utilize Sionna, Nvidia's 5G & 6G physical layer research software, for reproducible results.
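The $O(n)$-versus-$O(n^2)$ claim rests on the linear-attention trick: replacing the softmax with a kernel feature map $\phi$ lets the attention be computed as $\phi(Q)(\phi(K)^\top V)$, so the length-$n$ dimension is contracted once instead of forming an $n \times n$ matrix. A minimal sketch, assuming the common $\mathrm{elu}+1$ feature map rather than the paper's exact architecture:

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """O(n) kernelized attention sketch (not the paper's exact model).

    Q, K: (n, d) queries and keys; V: (n, d_v) values.
    phi(Q) @ (phi(K).T @ V) avoids the (n, n) attention matrix entirely:
    the K.T @ V contraction is one pass over the sequence.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, strictly positive
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                        # (d, d_v), computed once
    Z = Qf @ Kf.sum(axis=0) + eps        # per-query normalization
    return (Qf @ KV) / Z[:, None]
```

Numerically this produces the same result as forming `phi(Q) @ phi(K).T` explicitly and row-normalizing, but the memory and compute grow linearly in the block length $n$.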